Learning Dynamic Manipulation Skills under Unknown Dynamics with Guided Policy Search

Authors

  • Sergey Levine
  • Pieter Abbeel
Abstract

Planning and trajectory optimization can readily be used for kinematic control of robotic manipulation. However, planning dynamic motor skills requires a detailed physical simulation, and some aspects of the task, such as contacts, are very difficult to simulate with enough accuracy for dynamic manipulation. Alternatively, manipulation skills can be learned from experience, allowing them to deftly exploit the dynamics of the real world. This is the approach taken in reinforcement learning [12, 14, 6], where a control policy is optimized using experience gathered directly with the robot. However, applying reinforcement learning to realistic robotics tasks typically requires a carefully engineered policy class with a modest number of parameters to make the learning task tractable [3]. Recently developed guided policy search methods can be used to learn general-purpose controllers represented by neural networks, without task-specific engineering, by using trajectory optimization to discover successful task executions [7, 8, 9]. These methods previously required a simulator in order to perform trajectory optimization, making them difficult to apply to robotic motor skill learning. We present a trajectory optimization algorithm suitable for use with guided policy search that does not require a known dynamics model or simulator. Our experimental evaluation shows that this approach can optimize manipulation trajectories that are extremely challenging for previous reinforcement learning methods. We also show that, combined with guided policy search, our method can learn complex policies for a simulated peg insertion task in a partially observed environment.
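To make the high-level description concrete, here is a minimal sketch of one ingredient of such model-free trajectory optimization: fitting time-varying linear dynamics to rollouts collected on the robot by least squares. All names are illustrative, and the actual algorithm adds structure (noise models, step-size constraints) not shown here.

    # Sketch: fit x_{t+1} ≈ F_t [x_t; u_t] + f_t per time step from rollouts.
    # Illustrative only; the paper's method adds refinements not shown here.
    import numpy as np

    def fit_linear_dynamics(X, U):
        """X: states, shape (N, T+1, dx); U: actions, shape (N, T, du)."""
        N, T = U.shape[0], U.shape[1]
        F, f = [], []
        for t in range(T):
            # Regressors: current state, current action, and a bias column.
            XU = np.hstack([X[:, t], U[:, t], np.ones((N, 1))])
            # Least-squares solve for the next state.
            W, *_ = np.linalg.lstsq(XU, X[:, t + 1], rcond=None)
            F.append(W[:-1].T)  # linear term, shape (dx, dx + du)
            f.append(W[-1])     # constant offset, shape (dx,)
        return F, f

Given the fitted F_t and f_t, an LQR-style backward pass yields an updated local controller, and the loop repeats with fresh rollouts.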


Related articles

Learning Neural Network Policies with Guided Policy Search under Unknown Dynamics

We present a policy search method that uses iteratively refitted local linear models to optimize trajectory distributions for large, continuous problems. These trajectory distributions can be used within the framework of guided policy search to learn policies with an arbitrary parameterization. Our method fits time-varying linear dynamics models to speed up learning, but does not rely on learni...
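A rough sketch of the iterative refitting loop this abstract describes, with the robot interaction, model fit, and KL-bounded controller update passed in as placeholder callables (these names are assumptions, not the authors' API):

    # Sketch of the outer loop: run the controller, refit local linear
    # dynamics, update the controller with a bounded KL step. The three
    # callables are hypothetical placeholders, not a published API.
    def improve_trajectory(collect_rollouts, fit_dynamics, lqr_update,
                           controller, iters=10, kl_step=1.0):
        for _ in range(iters):
            X, U = collect_rollouts(controller)   # execute on the robot
            F, f = fit_dynamics(X, U)             # refit local linear model
            # Bound the KL divergence from the previous trajectory
            # distribution so the update stays where the model is valid.
            controller = lqr_update(F, f, controller, max_kl=kl_step)
        return controller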


Policy Learning with Hypothesis based Local Action Selection

For robots to be effective in human environments, they should be capable of successful task execution in unstructured environments. Many task-oriented manipulation behaviors executed by robots rely on model-based grasping strategies, and model-based strategies require accurate object detection and pose estimation. Both of these tasks are hard in human environments, since human environments...


Optimal adaptive leader-follower consensus of linear multi-agent systems: Known and unknown dynamics

In this paper, the optimal adaptive leader-follower consensus of linear continuous-time multi-agent systems is considered. The error dynamics of each player depend on its neighbors' information. Detailed analysis of online optimal leader-follower consensus under known and unknown dynamics is presented. The introduced reinforcement-learning-based algorithms learn online the approximate solution...
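For concreteness, a standard way to write the local neighborhood error that such consensus schemes drive to zero (the notation is an assumption for illustration, not quoted from the paper): a_{ij} are adjacency weights, and g_i > 0 only if agent i directly observes the leader state x_0.

    % Local neighborhood consensus error for agent i (standard form;
    % notation assumed here, not taken from the paper).
    e_i = \sum_{j \in \mathcal{N}_i} a_{ij}\,(x_i - x_j) + g_i\,(x_i - x_0)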


Guided Policy Search via Approximate Mirror Descent

Guided policy search algorithms can be used to optimize complex nonlinear policies, such as deep neural networks, without directly computing policy gradients in the high-dimensional parameter space. Instead, these methods use supervised learning to train the policy to mimic a “teacher” algorithm, such as a trajectory optimizer or a trajectory-centric reinforcement learning method. Guided policy...
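The supervised "mimic the teacher" step amounts to regressing the policy onto the teacher's actions; below is a minimal sketch assuming a PyTorch policy network (names are illustrative, and the constraint terms that make this a mirror-descent projection are omitted):

    # Sketch of the GPS supervised step: regress a neural network policy
    # onto actions produced by a teacher (e.g., a trajectory optimizer).
    # Illustrative only; the mirror-descent constraints are omitted.
    import torch

    def supervised_policy_step(policy, states, teacher_actions,
                               lr=1e-3, epochs=50):
        opt = torch.optim.Adam(policy.parameters(), lr=lr)
        for _ in range(epochs):
            loss = torch.mean((policy(states) - teacher_actions) ** 2)
            opt.zero_grad()
            loss.backward()
            opt.step()
        return policy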



Publication year: 2014